
📌 Retain class distribution for seed 8:
Class 0: 4500
Class 1: 4500
Class 2: 4500
Class 3: 4500
Class 4: 4500
Class 5: 4500
Class 6: 4500
Class 7: 4500
Class 8: 4500
Class 9: 4500

📌 Forget class distribution for seed 8:
Class 0: 500
Class 1: 500
Class 2: 500
Class 3: 500
Class 4: 500
Class 5: 500
Class 6: 500
Class 7: 500
Class 8: 500
Class 9: 500
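
The two distributions above come from a per-class split of the CIFAR-10 training set, so every class keeps the same retain:forget ratio. A minimal sketch of how such a split could be produced is below; the function name, the forget fraction, and the use of NumPy's seeded generator are illustrative assumptions, not the script's actual code.

```python
import numpy as np

def class_balanced_split(labels, forget_frac=0.1, seed=8):
    """Split sample indices into retain/forget sets, keeping every
    class at the same retain:forget ratio (here 4500:500 per class)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    retain_idx, forget_idx = [], []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)                      # seeded, in-place shuffle
        n_forget = int(len(idx) * forget_frac)
        forget_idx.extend(idx[:n_forget])
        retain_idx.extend(idx[n_forget:])
    return np.array(retain_idx), np.array(forget_idx)

# Example: CIFAR-10 train labels -> 4500 retain / 500 forget per class
# labels = [y for _, y in torchvision.datasets.CIFAR10(root, train=True)]
# retain_idx, forget_idx = class_balanced_split(labels, forget_frac=0.1, seed=8)
```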

📊 Updated class distribution:
Retain set:
  Class 0: 4750
  Class 1: 4750
  Class 2: 4750
  Class 3: 4750
  Class 4: 4750
  Class 5: 4750
  Class 6: 4750
  Class 7: 4750
  Class 8: 4750
  Class 9: 4750
Forget set:
  Class 0: 250
  Class 1: 250
  Class 2: 250
  Class 3: 250
  Class 4: 250
  Class 5: 250
  Class 6: 250
  Class 7: 250
  Class 8: 250
  Class 9: 250
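
The updated counts can be re-derived from the label arrays directly. A small sketch in that spirit, with hypothetical names (`print_class_distribution`, `retain_idx`), not the logger actually used:

```python
from collections import Counter

def print_class_distribution(name, labels):
    """Print 'Class c: n' counts in the same format as the log above."""
    counts = Counter(labels)
    print(f"{name}:")
    for c in sorted(counts):
        print(f"  Class {c}: {counts[c]}")

# print_class_distribution("Retain set", [labels[i] for i in retain_idx])
# print_class_distribution("Forget set", [labels[i] for i in forget_idx])
```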
⚠️ Warning: Retain train loader may not be shuffled.
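
The warning suggests the retain loader was built without shuffling. A minimal sketch of a shuffled retain loader, assuming PyTorch's `DataLoader` over a `Subset` of the training set (the `TensorDataset` here is a stand-in for CIFAR-10):

```python
import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

# Stand-in dataset with CIFAR-10 shapes; in the real run this would be
# the torchvision CIFAR-10 train set with its transforms.
train_set = TensorDataset(torch.randn(50_000, 3, 32, 32),
                          torch.randint(0, 10, (50_000,)))
retain_idx = list(range(47_500))  # placeholder for the split above

# shuffle=True reshuffles the 47,500 retain samples every epoch and
# would silence the "may not be shuffled" warning.
retain_loader = DataLoader(Subset(train_set, retain_idx),
                           batch_size=256, shuffle=True,
                           num_workers=4, pin_memory=True)
```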
Training Epoch: 1 [256/47500]	Loss: 2.4219	LR: 0.000000
Training Epoch: 1 [512/47500]	Loss: 2.4659	LR: 0.000538
Training Epoch: 1 [768/47500]	Loss: 2.4357	LR: 0.001075
Training Epoch: 1 [1024/47500]	Loss: 2.3586	LR: 0.001613
Training Epoch: 1 [1280/47500]	Loss: 2.3333	LR: 0.002151
Training Epoch: 1 [1536/47500]	Loss: 2.2795	LR: 0.002688
Training Epoch: 1 [1792/47500]	Loss: 2.2298	LR: 0.003226
Training Epoch: 1 [2048/47500]	Loss: 2.1790	LR: 0.003763
Training Epoch: 1 [2304/47500]	Loss: 2.2789	LR: 0.004301
Training Epoch: 1 [2560/47500]	Loss: 2.1915	LR: 0.004839
Training Epoch: 1 [2816/47500]	Loss: 2.1852	LR: 0.005376
Training Epoch: 1 [3072/47500]	Loss: 2.1908	LR: 0.005914
Training Epoch: 1 [3328/47500]	Loss: 2.1030	LR: 0.006452
Training Epoch: 1 [3584/47500]	Loss: 2.1300	LR: 0.006989
Training Epoch: 1 [3840/47500]	Loss: 2.0796	LR: 0.007527
Training Epoch: 1 [4096/47500]	Loss: 1.9570	LR: 0.008065
Training Epoch: 1 [4352/47500]	Loss: 1.9194	LR: 0.008602
Training Epoch: 1 [4608/47500]	Loss: 1.8485	LR: 0.009140
Training Epoch: 1 [4864/47500]	Loss: 1.9772	LR: 0.009677
Training Epoch: 1 [5120/47500]	Loss: 1.9500	LR: 0.010215
Training Epoch: 1 [5376/47500]	Loss: 1.8293	LR: 0.010753
Training Epoch: 1 [5632/47500]	Loss: 1.7838	LR: 0.011290
Training Epoch: 1 [5888/47500]	Loss: 1.9546	LR: 0.011828
Training Epoch: 1 [6144/47500]	Loss: 1.8128	LR: 0.012366
Training Epoch: 1 [6400/47500]	Loss: 1.7288	LR: 0.012903
Training Epoch: 1 [6656/47500]	Loss: 1.8705	LR: 0.013441
Training Epoch: 1 [6912/47500]	Loss: 1.6909	LR: 0.013978
Training Epoch: 1 [7168/47500]	Loss: 1.7384	LR: 0.014516
Training Epoch: 1 [7424/47500]	Loss: 1.9140	LR: 0.015054
Training Epoch: 1 [7680/47500]	Loss: 1.6399	LR: 0.015591
Training Epoch: 1 [7936/47500]	Loss: 1.6022	LR: 0.016129
Training Epoch: 1 [8192/47500]	Loss: 1.8396	LR: 0.016667
Training Epoch: 1 [8448/47500]	Loss: 1.5979	LR: 0.017204
Training Epoch: 1 [8704/47500]	Loss: 1.8123	LR: 0.017742
Training Epoch: 1 [8960/47500]	Loss: 1.6915	LR: 0.018280
Training Epoch: 1 [9216/47500]	Loss: 1.6612	LR: 0.018817
Training Epoch: 1 [9472/47500]	Loss: 1.6295	LR: 0.019355
Training Epoch: 1 [9728/47500]	Loss: 1.6073	LR: 0.019892
Training Epoch: 1 [9984/47500]	Loss: 1.6228	LR: 0.020430
Training Epoch: 1 [10240/47500]	Loss: 1.5759	LR: 0.020968
Training Epoch: 1 [10496/47500]	Loss: 1.6704	LR: 0.021505
Training Epoch: 1 [10752/47500]	Loss: 1.7014	LR: 0.022043
Training Epoch: 1 [11008/47500]	Loss: 1.5680	LR: 0.022581
Training Epoch: 1 [11264/47500]	Loss: 1.6750	LR: 0.023118
Training Epoch: 1 [11520/47500]	Loss: 1.7697	LR: 0.023656
Training Epoch: 1 [11776/47500]	Loss: 1.5224	LR: 0.024194
Training Epoch: 1 [12032/47500]	Loss: 1.6117	LR: 0.024731
Training Epoch: 1 [12288/47500]	Loss: 1.4873	LR: 0.025269
Training Epoch: 1 [12544/47500]	Loss: 1.6176	LR: 0.025806
Training Epoch: 1 [12800/47500]	Loss: 1.6501	LR: 0.026344
Training Epoch: 1 [13056/47500]	Loss: 1.5451	LR: 0.026882
Training Epoch: 1 [13312/47500]	Loss: 1.6223	LR: 0.027419
Training Epoch: 1 [13568/47500]	Loss: 1.6700	LR: 0.027957
Training Epoch: 1 [13824/47500]	Loss: 1.6002	LR: 0.028495
Training Epoch: 1 [14080/47500]	Loss: 1.6164	LR: 0.029032
Training Epoch: 1 [14336/47500]	Loss: 1.5245	LR: 0.029570
Training Epoch: 1 [14592/47500]	Loss: 1.5323	LR: 0.030108
Training Epoch: 1 [14848/47500]	Loss: 1.5063	LR: 0.030645
Training Epoch: 1 [15104/47500]	Loss: 1.5891	LR: 0.031183
Training Epoch: 1 [15360/47500]	Loss: 1.4367	LR: 0.031720
Training Epoch: 1 [15616/47500]	Loss: 1.5757	LR: 0.032258
Training Epoch: 1 [15872/47500]	Loss: 1.5499	LR: 0.032796
Training Epoch: 1 [16128/47500]	Loss: 1.5096	LR: 0.033333
Training Epoch: 1 [16384/47500]	Loss: 1.5508	LR: 0.033871
Training Epoch: 1 [16640/47500]	Loss: 1.4675	LR: 0.034409
Training Epoch: 1 [16896/47500]	Loss: 1.4955	LR: 0.034946
Training Epoch: 1 [17152/47500]	Loss: 1.6687	LR: 0.035484
Training Epoch: 1 [17408/47500]	Loss: 1.5027	LR: 0.036022
Training Epoch: 1 [17664/47500]	Loss: 1.5511	LR: 0.036559
Training Epoch: 1 [17920/47500]	Loss: 1.5142	LR: 0.037097
Training Epoch: 1 [18176/47500]	Loss: 1.6359	LR: 0.037634
Training Epoch: 1 [18432/47500]	Loss: 1.3558	LR: 0.038172
Training Epoch: 1 [18688/47500]	Loss: 1.5313	LR: 0.038710
Training Epoch: 1 [18944/47500]	Loss: 1.5886	LR: 0.039247
Training Epoch: 1 [19200/47500]	Loss: 1.5006	LR: 0.039785
Training Epoch: 1 [19456/47500]	Loss: 1.5766	LR: 0.040323
Training Epoch: 1 [19712/47500]	Loss: 1.4357	LR: 0.040860
Training Epoch: 1 [19968/47500]	Loss: 1.4520	LR: 0.041398
Training Epoch: 1 [20224/47500]	Loss: 1.5700	LR: 0.041935
Training Epoch: 1 [20480/47500]	Loss: 1.3237	LR: 0.042473
Training Epoch: 1 [20736/47500]	Loss: 1.5659	LR: 0.043011
Training Epoch: 1 [20992/47500]	Loss: 1.6043	LR: 0.043548
Training Epoch: 1 [21248/47500]	Loss: 1.5210	LR: 0.044086
Training Epoch: 1 [21504/47500]	Loss: 1.5660	LR: 0.044624
Training Epoch: 1 [21760/47500]	Loss: 1.4745	LR: 0.045161
Training Epoch: 1 [22016/47500]	Loss: 1.6168	LR: 0.045699
Training Epoch: 1 [22272/47500]	Loss: 1.4617	LR: 0.046237
Training Epoch: 1 [22528/47500]	Loss: 1.4142	LR: 0.046774
Training Epoch: 1 [22784/47500]	Loss: 1.3985	LR: 0.047312
Training Epoch: 1 [23040/47500]	Loss: 1.5462	LR: 0.047849
Training Epoch: 1 [23296/47500]	Loss: 1.3971	LR: 0.048387
Training Epoch: 1 [23552/47500]	Loss: 1.4876	LR: 0.048925
Training Epoch: 1 [23808/47500]	Loss: 1.7248	LR: 0.049462
Training Epoch: 1 [24064/47500]	Loss: 1.4457	LR: 0.050000
Training Epoch: 1 [24320/47500]	Loss: 1.5293	LR: 0.050538
Training Epoch: 1 [24576/47500]	Loss: 1.4471	LR: 0.051075
Training Epoch: 1 [24832/47500]	Loss: 1.3586	LR: 0.051613
Training Epoch: 1 [25088/47500]	Loss: 1.5914	LR: 0.052151
Training Epoch: 1 [25344/47500]	Loss: 1.7206	LR: 0.052688
Training Epoch: 1 [25600/47500]	Loss: 1.4127	LR: 0.053226
Training Epoch: 1 [25856/47500]	Loss: 1.3075	LR: 0.053763
Training Epoch: 1 [26112/47500]	Loss: 1.5461	LR: 0.054301
Training Epoch: 1 [26368/47500]	Loss: 1.5541	LR: 0.054839
Training Epoch: 1 [26624/47500]	Loss: 1.4925	LR: 0.055376
Training Epoch: 1 [26880/47500]	Loss: 1.5161	LR: 0.055914
Training Epoch: 1 [27136/47500]	Loss: 1.4933	LR: 0.056452
Training Epoch: 1 [27392/47500]	Loss: 1.2787	LR: 0.056989
Training Epoch: 1 [27648/47500]	Loss: 1.4166	LR: 0.057527
Training Epoch: 1 [27904/47500]	Loss: 1.4218	LR: 0.058065
Training Epoch: 1 [28160/47500]	Loss: 1.5605	LR: 0.058602
Training Epoch: 1 [28416/47500]	Loss: 1.3214	LR: 0.059140
Training Epoch: 1 [28672/47500]	Loss: 1.2779	LR: 0.059677
Training Epoch: 1 [28928/47500]	Loss: 1.4847	LR: 0.060215
Training Epoch: 1 [29184/47500]	Loss: 1.4158	LR: 0.060753
Training Epoch: 1 [29440/47500]	Loss: 1.3639	LR: 0.061290
Training Epoch: 1 [29696/47500]	Loss: 1.3454	LR: 0.061828
Training Epoch: 1 [29952/47500]	Loss: 1.4627	LR: 0.062366
Training Epoch: 1 [30208/47500]	Loss: 1.3021	LR: 0.062903
Training Epoch: 1 [30464/47500]	Loss: 1.5611	LR: 0.063441
Training Epoch: 1 [30720/47500]	Loss: 1.2168	LR: 0.063978
Training Epoch: 1 [30976/47500]	Loss: 1.4274	LR: 0.064516
Training Epoch: 1 [31232/47500]	Loss: 1.3449	LR: 0.065054
Training Epoch: 1 [31488/47500]	Loss: 1.3363	LR: 0.065591
Training Epoch: 1 [31744/47500]	Loss: 1.4609	LR: 0.066129
Training Epoch: 1 [32000/47500]	Loss: 1.4182	LR: 0.066667
Training Epoch: 1 [32256/47500]	Loss: 1.2442	LR: 0.067204
Training Epoch: 1 [32512/47500]	Loss: 1.4226	LR: 0.067742
Training Epoch: 1 [32768/47500]	Loss: 1.5266	LR: 0.068280
Training Epoch: 1 [33024/47500]	Loss: 1.2984	LR: 0.068817
Training Epoch: 1 [33280/47500]	Loss: 1.5473	LR: 0.069355
Training Epoch: 1 [33536/47500]	Loss: 1.5906	LR: 0.069892
Training Epoch: 1 [33792/47500]	Loss: 1.4261	LR: 0.070430
Training Epoch: 1 [34048/47500]	Loss: 1.5068	LR: 0.070968
Training Epoch: 1 [34304/47500]	Loss: 1.5474	LR: 0.071505
Training Epoch: 1 [34560/47500]	Loss: 1.5097	LR: 0.072043
Training Epoch: 1 [34816/47500]	Loss: 1.3170	LR: 0.072581
Training Epoch: 1 [35072/47500]	Loss: 1.3303	LR: 0.073118
Training Epoch: 1 [35328/47500]	Loss: 1.2971	LR: 0.073656
Training Epoch: 1 [35584/47500]	Loss: 1.2811	LR: 0.074194
Training Epoch: 1 [35840/47500]	Loss: 1.4560	LR: 0.074731
Training Epoch: 1 [36096/47500]	Loss: 1.3685	LR: 0.075269
Training Epoch: 1 [36352/47500]	Loss: 1.5414	LR: 0.075806
Training Epoch: 1 [36608/47500]	Loss: 1.3613	LR: 0.076344
Training Epoch: 1 [36864/47500]	Loss: 1.3812	LR: 0.076882
Training Epoch: 1 [37120/47500]	Loss: 1.5522	LR: 0.077419
Training Epoch: 1 [37376/47500]	Loss: 1.3358	LR: 0.077957
Training Epoch: 1 [37632/47500]	Loss: 1.2941	LR: 0.078495
Training Epoch: 1 [37888/47500]	Loss: 1.2192	LR: 0.079032
Training Epoch: 1 [38144/47500]	Loss: 1.1777	LR: 0.079570
Training Epoch: 1 [38400/47500]	Loss: 1.2578	LR: 0.080108
Training Epoch: 1 [38656/47500]	Loss: 1.2795	LR: 0.080645
Training Epoch: 1 [38912/47500]	Loss: 1.3812	LR: 0.081183
Training Epoch: 1 [39168/47500]	Loss: 1.3472	LR: 0.081720
Training Epoch: 1 [39424/47500]	Loss: 1.2075	LR: 0.082258
Training Epoch: 1 [39680/47500]	Loss: 1.4218	LR: 0.082796
Training Epoch: 1 [39936/47500]	Loss: 1.3565	LR: 0.083333
Training Epoch: 1 [40192/47500]	Loss: 1.2855	LR: 0.083871
Training Epoch: 1 [40448/47500]	Loss: 1.5777	LR: 0.084409
Training Epoch: 1 [40704/47500]	Loss: 1.3361	LR: 0.084946
Training Epoch: 1 [40960/47500]	Loss: 1.4223	LR: 0.085484
Training Epoch: 1 [41216/47500]	Loss: 1.4979	LR: 0.086022
Training Epoch: 1 [41472/47500]	Loss: 1.6530	LR: 0.086559
Training Epoch: 1 [41728/47500]	Loss: 1.5191	LR: 0.087097
Training Epoch: 1 [41984/47500]	Loss: 1.3345	LR: 0.087634
Training Epoch: 1 [42240/47500]	Loss: 1.3417	LR: 0.088172
Training Epoch: 1 [42496/47500]	Loss: 1.2491	LR: 0.088710
Training Epoch: 1 [42752/47500]	Loss: 1.2614	LR: 0.089247
Training Epoch: 1 [43008/47500]	Loss: 1.3721	LR: 0.089785
Training Epoch: 1 [43264/47500]	Loss: 1.2837	LR: 0.090323
Training Epoch: 1 [43520/47500]	Loss: 1.3346	LR: 0.090860
Training Epoch: 1 [43776/47500]	Loss: 1.2930	LR: 0.091398
Training Epoch: 1 [44032/47500]	Loss: 1.2059	LR: 0.091935
Training Epoch: 1 [44288/47500]	Loss: 1.3046	LR: 0.092473
Training Epoch: 1 [44544/47500]	Loss: 1.3377	LR: 0.093011
Training Epoch: 1 [44800/47500]	Loss: 1.3845	LR: 0.093548
Training Epoch: 1 [45056/47500]	Loss: 1.1562	LR: 0.094086
Training Epoch: 1 [45312/47500]	Loss: 1.2456	LR: 0.094624
Training Epoch: 1 [45568/47500]	Loss: 1.2698	LR: 0.095161
Training Epoch: 1 [45824/47500]	Loss: 1.3936	LR: 0.095699
Training Epoch: 1 [46080/47500]	Loss: 1.1740	LR: 0.096237
Training Epoch: 1 [46336/47500]	Loss: 1.1484	LR: 0.096774
Training Epoch: 1 [46592/47500]	Loss: 1.1650	LR: 0.097312
Training Epoch: 1 [46848/47500]	Loss: 1.2024	LR: 0.097849
Training Epoch: 1 [47104/47500]	Loss: 1.1259	LR: 0.098387
Training Epoch: 1 [47360/47500]	Loss: 1.2350	LR: 0.098925
Training Epoch: 1 [47500/47500]	Loss: 1.1761	LR: 0.099462
Epoch 1 - Average Train Loss: 1.5497, Train Accuracy: 0.4432
Epoch 1 training time consumed: 18.66s
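
Throughout epoch 1 the logged LR climbs linearly from 0 toward the base LR in steps of ≈0.000538 = 0.1 / 186, i.e. one increment per batch over ceil(47500 / 256) = 186 batches. That is consistent with a per-iteration linear warmup; a sketch under that assumption (the scheduler class and the SGD hyperparameters are illustrative):

```python
import torch
from torch.optim.lr_scheduler import _LRScheduler

class WarmUpLR(_LRScheduler):
    """Linearly scale the LR from 0 up to the base LR over `total_iters`
    optimizer steps, matching the per-batch LR ramp in the log above."""
    def __init__(self, optimizer, total_iters, last_epoch=-1):
        self.total_iters = total_iters
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        # last_epoch counts optimizer steps here (scheduler.step() per batch)
        return [base_lr * self.last_epoch / (self.total_iters + 1e-8)
                for base_lr in self.base_lrs]

# Assumed setup: SGD at base LR 0.1, warmed up over one epoch of
# ceil(47500 / 256) = 186 batches, i.e. LR += 0.1/186 ≈ 0.000538 per step.
model = torch.nn.Linear(10, 10)  # stand-in for ResNet18
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
warmup = WarmUpLR(optimizer, total_iters=186)
# In the training loop: after each optimizer.step() during epoch 1,
# call warmup.step() so the next batch uses the ramped LR.
```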
Evaluating Network.....
Test set: Epoch: 1, Average loss: 0.0061, Accuracy: 0.4687, Time consumed: 0.97s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_02_August_2025_17h_53m_44s/ResNet18-Cifar10-seed8-ret50-1-best.pth
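
The `-best.pth` filename implies a save-on-best-accuracy policy after each evaluation. A sketch of that evaluate-then-checkpoint step, assuming standard PyTorch (the `evaluate` helper and the `run_stamp`/`epoch` variables are hypothetical):

```python
import torch

@torch.no_grad()
def evaluate(model, loader, device="cuda"):
    """Return average per-sample loss and accuracy on `loader`."""
    model.eval()
    loss_fn = torch.nn.CrossEntropyLoss(reduction="sum")
    total_loss, correct, n = 0.0, 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        logits = model(x)
        total_loss += loss_fn(logits, y).item()
        correct += (logits.argmax(1) == y).sum().item()
        n += y.size(0)
    return total_loss / n, correct / n

# best_acc = 0.0
# test_loss, test_acc = evaluate(model, test_loader)
# if test_acc > best_acc:
#     best_acc = test_acc
#     torch.save(model.state_dict(),
#                f"checkpoint/retrain/ResNet18/{run_stamp}/"
#                f"ResNet18-Cifar10-seed8-ret50-{epoch}-best.pth")
```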
Valid (Test) Dl:  10000
Train Dl:  50000
Retain Train Dl:  47500
Forget Train Dl:  2500
Retain Valid Dl:  47500
Forget Valid Dl:  2500
retain_prob Distribution: 10000 samples
test_prob Distribution: 10000 samples
forget_prob Distribution: 2500 samples
Set1 Distribution: 2500 samples
Set2 Distribution: 2500 samples
Set1 Distribution: 2500 samples
Set2 Distribution: 2500 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
Test Accuracy: 46.796875
Retain Accuracy: 48.04795837402344
Zero Retrain Forgetting (ZRF): 0.9368546009063721
Membership Inference Attack (MIA): 0.394
Forget vs Retain Membership Inference Attack (MIA): 0.596
Forget vs Test Membership Inference Attack (MIA): 0.512
Test vs Retain Membership Inference Attack (MIA): 0.53675
Train vs Test Membership Inference Attack (MIA): 0.49625
Forget Set Accuracy (Df): 47.299110412597656
Method Execution Time: 895.19 seconds
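
For reference, Zero Retrain Forgetting is commonly defined (Chundawat et al.) as one minus the mean Jensen-Shannon divergence between the evaluated model's forget-set predictions and those of a randomly initialised model, so a score near 1 (0.937 here) means the model is nearly indistinguishable from a never-trained one on the forget set. A sketch assuming that definition:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zrf_score(model, random_model, forget_loader, device="cuda"):
    """ZRF = 1 - mean JS divergence between the evaluated model's and a
    randomly initialised model's softmax outputs on the forget set.
    Assumes the Chundawat et al. definition; ~1 = fully forgotten."""
    model.eval(); random_model.eval()
    js_sum, n = 0.0, 0
    for x, _ in forget_loader:
        x = x.to(device)
        p = F.softmax(model(x), dim=1)
        q = F.softmax(random_model(x), dim=1)
        m = 0.5 * (p + q)
        # JS(p||q) = 0.5*KL(p||m) + 0.5*KL(q||m), computed per sample
        kl_pm = (p * (p.clamp_min(1e-12).log() - m.clamp_min(1e-12).log())).sum(1)
        kl_qm = (q * (q.clamp_min(1e-12).log() - m.clamp_min(1e-12).log())).sum(1)
        js_sum += (0.5 * (kl_pm + kl_qm)).sum().item()
        n += x.size(0)
    return 1.0 - js_sum / n
```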
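
The MIA rows report the accuracy of a membership attacker separating the two named sample sets, with 0.5 meaning chance. One common construction is sketched below, under the assumption that per-sample loss is the attack feature; the feature choice and the logistic-regression attacker are assumptions, not necessarily this script's attack:

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

@torch.no_grad()
def collect_losses(model, loader, device="cuda"):
    """Per-sample cross-entropy losses, a standard MIA feature."""
    model.eval()
    out = []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        out.append(F.cross_entropy(model(x), y, reduction="none").cpu())
    return torch.cat(out).numpy()

def mia_accuracy(model, set1_loader, set2_loader):
    """Mean cross-validated accuracy of a logistic-regression attacker
    separating Set1 from Set2 by loss; 0.5 = cannot tell them apart."""
    l1 = collect_losses(model, set1_loader)
    l2 = collect_losses(model, set2_loader)
    X = np.concatenate([l1, l2]).reshape(-1, 1)
    y = np.concatenate([np.zeros(len(l1)), np.ones(len(l2))])
    clf = LogisticRegression()
    return cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
```

On this reading, Forget vs Test near chance (0.512) indicates the forgotten samples behave like unseen data, which is the expected outcome of retraining from scratch without them.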
